
    Studies of G-Protein Coupled Receptors Incorporated Into a Novel, Nanoscale, Membrane-Mimetic System

    From first principles of phospholipid-apolipoprotein A-I (apo A-I) interactions, we hypothesized that the amino acid sequence of apo A-I derived from a different species may exhibit improved properties compared to human apo A-I for the incorporation of G protein-coupled receptors (GPCRs) into homogeneous discoidal lipoprotein particles. We generated apolipoprotein A-I DNA derived from zebrafish (Danio rerio) using molecular cloning techniques and optimized a heterologous bacterial expression system, together with a protein purification and labeling scheme, to obtain high yields of pure zebrafish apo A-I. We demonstrated that zebrafish apo A-I forms stable, homogeneous discoidal lipoprotein particles, which we termed Nanoscale Apolipoprotein Bound Bilayers (NABBs). Using bovine rhodopsin as a model system, we developed a novel method for rapidly incorporating GPCRs into NABBs, a significant improvement over traditional methods that requires less time and material for receptor reconstitution. The method was generalized to incorporate GPCRs heterologously expressed in mammalian cells, which are available at comparatively lower receptor concentration and purity. A novel ELISA technique was developed for high-throughput quantification of the GPCRs incorporated into NABBs. We also developed methods to control the receptor-to-NABB ratio and imaged rhodopsin-NABBs using electron microscopy (EM) to measure stoichiometry and receptor orientation. Using a combination of EM imaging and biochemical analyses, we correlated the stability and signaling efficiency of rhodopsin in NABBs containing either one or two receptors. We found that the specific activity of G protein coupling for single rhodopsins sequestered in individual NABBs was nearly identical to that of two rhodopsins per NABB under conditions where stoichiometry and orientation could be inferred by EM imaging.
Thermal stability of rhodopsin and CCR5 in NABBs was superior to that of the receptors in various detergents commonly used for membrane protein work, and CCR5 in NABBs retained its ability to activate G protein. This work highlights NABBs as a promising tool for in vitro manipulation of GPCR stoichiometry and for biophysical studies of GPCRs in an isolated, native-like, cell-free system.

    Minimum Distance Estimation of Milky Way Model Parameters and Related Inference

    We propose a method, based on the Hellinger distance, to estimate the location of the Sun in the disk of the Milky Way, and construct confidence sets on this estimate using a bootstrap-based method. Assuming the Galactic disk to be two-dimensional, the sought solar location reduces to the radial distance separating the Sun from the Galactic center and the angular separation of the Galactic-center-to-Sun line from a pre-fixed line on the disk. On astronomical scales, the unknown solar location is equivalent to the location of us earthlings, who observe the velocities of a sample of stars in the neighborhood of the Sun. This unknown location is estimated by pairwise comparisons of the estimated density of the observed set of velocities of the sampled stars with densities estimated from synthetic stellar velocity data sets generated at chosen locations in the Milky Way disk according to four base astrophysical models. The "match" between a pair of estimated densities is parameterized by the affinity measure based on the familiar Hellinger distance. We perform a novel cross-validation procedure to establish a desirable "consistency" property of the proposed method. Comment: 25 pages, 10 figures. This version incorporates the suggestions made by the referees. To appear in SIAM/ASA Journal on Uncertainty Quantification.
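The affinity measure the abstract refers to can be sketched in a few lines. Below is a minimal, illustrative computation of the Hellinger affinity and distance between two discrete (histogram-style) density estimates; the bin values are hypothetical, not the paper's stellar-velocity data.

```python
import math

def hellinger_affinity(p, q):
    """Bhattacharyya/Hellinger affinity between two discrete densities.
    Lies in [0, 1]; equals 1 when the densities coincide."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def hellinger_distance(p, q):
    """Hellinger distance H(p, q) = sqrt(1 - affinity(p, q))."""
    return math.sqrt(max(0.0, 1.0 - hellinger_affinity(p, q)))

# Two histogram estimates over the same bins (hypothetical values)
p = [0.1, 0.4, 0.3, 0.2]
q = [0.1, 0.4, 0.3, 0.2]
r = [0.7, 0.1, 0.1, 0.1]

print(hellinger_distance(p, q))  # ~0 for identical densities
print(hellinger_distance(p, r))  # larger for dissimilar densities
```

In the paper's setup, the estimated density of the observed velocities would play the role of `p`, and each synthetic data set generated at a candidate solar location would supply a `q`; the location maximizing the affinity is the estimate.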

    A New Spatio-Temporal Model Exploiting Hamiltonian Equations

    The solutions of Hamiltonian equations are known to describe the underlying phase space of a mechanical system. In Bayesian statistics, the only setting in which the properties of solutions to the Hamiltonian equations have been successfully applied is Hamiltonian Monte Carlo. In this article, we propose a novel spatio-temporal model using a strategic modification of the Hamiltonian equations, incorporating appropriate stochasticity via Gaussian processes. The resultant spatio-temporal process, continuously varying with time, turns out to be nonparametric, nonstationary, nonseparable and non-Gaussian. Moreover, the lagged correlations tend to zero as the spatio-temporal lag tends to infinity. We investigate the theoretical properties of the new spatio-temporal process, including its continuity and smoothness properties. Within the Bayesian paradigm, we derive methods for complete Bayesian inference using MCMC techniques. Applications of our new model and methods to two simulation experiments and two real data sets revealed encouraging performance.
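The abstract does not spell out its construction, but the generic idea of injecting stochasticity into Hamiltonian dynamics can be illustrated. The sketch below integrates the Hamiltonian equations for a harmonic oscillator, H = (p² + q²)/2, with i.i.d. Gaussian kicks on the momentum as a crude stand-in for the paper's Gaussian-process term; the Hamiltonian, step size, and noise scale are all illustrative choices, not the authors' model.

```python
import random

random.seed(42)

def stochastic_hamiltonian_path(q0, p0, dt=0.01, steps=1000, sigma=0.1):
    """Symplectic-Euler integration of dq/dt = dH/dp, dp/dt = -dH/dq
    for H = (p^2 + q^2)/2, with Gaussian noise added to the momentum
    update (an i.i.d. stand-in for a Gaussian-process perturbation)."""
    q, p = q0, p0
    path = [(q, p)]
    for _ in range(steps):
        p = p - dt * q + sigma * (dt ** 0.5) * random.gauss(0.0, 1.0)
        q = q + dt * p
        path.append((q, p))
    return path

path = stochastic_hamiltonian_path(1.0, 0.0)
print(path[-1])  # noisy point near the deterministic unit circle
```

Each realization of the noise produces a different trajectory, so the position coordinate traced over time behaves as a random process, which is the flavor of construction the abstract describes.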

    A Simple Model of Endemicity to Analyse Spread and Control of COVID-19 in India

    A simple model based on two parameters, time-dependent infectability and the efficacy of containment measures, is constructed to analyse the spread and containment of an endemic outbreak. Data from the first wave of the COVID-19 outbreak in India are analysed. Interestingly, the growth and decay of infections can be seen as a competition between the ratio of the logarithm of infectability to the logarithm of time and the efficacy of the containment measures imposed. Containment time estimates demonstrate the viability of this simple model.

    Exploring the extent of validity of quantum work fluctuation theorems in the presence of weak measurements

    Quantum work fluctuation theorems are known to hold when work is defined as the difference between the outcomes of projective measurements of the system Hamiltonian at the initial and final time instants of each experimental realization of the process. A recent study showed that the theorem breaks down if the measurement is of a more general nature, i.e. if a positive operator-valued measurement is used, and that the deviation vanishes only in the limit where the operators become projective. We study a simple two-state system subjected to unitary evolution under a Hamiltonian that depends linearly on time, and verify the validity of the above statement. We further define a weak value of work and show that the deviation from the exact work fluctuation theorems is much smaller in this formalism. Comment: 16 pages, 5 figures.
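The projective (two-point measurement) definition of work mentioned above can be checked numerically. The sketch below verifies the Jarzynski equality ⟨e^(−βW)⟩ = Z₁/Z₀ for a two-level system under a sudden quench from εσ_z to εσ_x; this is an illustrative protocol chosen so the transition probabilities are known exactly, not the paper's linearly time-dependent Hamiltonian.

```python
import math

beta, eps = 1.0, 0.5
E0 = [-eps, eps]   # eigenvalues of the initial Hamiltonian, eps * sigma_z
E1 = [-eps, eps]   # eigenvalues of the final Hamiltonian, eps * sigma_x

# The eigenbases of sigma_z and sigma_x are mutually unbiased, so every
# transition probability |<f|i>|^2 for the sudden quench equals 1/2.
P = [[0.5, 0.5], [0.5, 0.5]]

Z0 = sum(math.exp(-beta * e) for e in E0)
Z1 = sum(math.exp(-beta * e) for e in E1)

# Two-point projective measurement statistics: a trajectory (i, f) occurs
# with probability p_i * P[i][f] and carries work W = E1[f] - E0[i].
avg_exp_w = sum(
    (math.exp(-beta * E0[i]) / Z0) * P[i][f] * math.exp(-beta * (E1[f] - E0[i]))
    for i in range(2) for f in range(2)
)

print(avg_exp_w, Z1 / Z0)  # the two agree: Jarzynski equality holds
```

Replacing the projectors behind `P` with a non-projective POVM is exactly the modification that, per the study cited in the abstract, makes the equality fail until the POVM elements become projective again.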

    Measuring poverty in India with machine learning and remote sensing

    In this paper, we use deep learning to estimate living conditions in India. We use both census and survey data to train the models. Our procedure achieves results comparable to those found in the literature, but for a wide range of outcomes.

    Using Satellite Images and Deep Learning to Measure Health and Living Standards in India

    Using deep learning with satellite images enhances our understanding of human development at a granular spatial and temporal level. Most studies have focused on Africa and on a narrow set of asset-based indicators. This article leverages georeferenced village-level census data covering 40% of the population of India to train deep models that predict 16 indicators of human well-being from Landsat 7 imagery. Based on the principles of transfer learning, the census-based model is used as a feature extractor to train another model that predicts an even larger set of developmental variables, over 90 in total, included in two rounds of the National Family Health Survey (NFHS). The census-based feature-extractor model outperforms the current standard in the literature for most of these NFHS variables. Overall, the results show that combining satellite data with Indian Census data unlocks rich information for training deep models that track human development at an unprecedented geographical and temporal resolution.
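The feature-extractor pattern described above can be sketched generically: a pretrained model's internal representation is frozen, and only a new head is fit on those features for the downstream task. The toy extractor and data below are stand-ins for illustration, not the article's census-trained network or NFHS variables.

```python
def phi(x):
    """Frozen feature extractor (stands in for the census-trained model's
    penultimate-layer representation). Its parameters are never updated."""
    return [1.0, x, x * x]

def fit_head(xs, ys):
    """Fit a new linear head w on frozen features by normal equations,
    solving (F'F) w = F'y with Gaussian elimination."""
    F = [phi(x) for x in xs]
    d = len(F[0])
    A = [[sum(f[i] * f[j] for f in F) for j in range(d)] for i in range(d)]
    b = [sum(F[k][i] * ys[k] for k in range(len(ys))) for i in range(d)]
    for col in range(d):                       # forward elimination
        piv = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, d):
            m = A[r][col] / A[col][col]
            for c in range(col, d):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    w = [0.0] * d                              # back substitution
    for r in range(d - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, d))) / A[r][r]
    return w

# "Downstream task": a small labeled set, analogous to the NFHS rounds
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [x * x + 1.0 for x in xs]
w = fit_head(xs, ys)
predict = lambda x: sum(wi * fi for wi, fi in zip(w, phi(x)))
print(predict(5.0))  # close to 26.0
```

Only the head `w` is estimated from the downstream labels, which is why the approach works even when, as in the NFHS, the second data set is much smaller than the one the extractor was trained on.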

    Risk Portfolio Optimization Using the Markowitz MVO Model, Linked to Human Limitations in Predicting the Future from the Perspective of the Al-Qur'an

    Risk portfolio management in modern finance has become increasingly technical, requiring sophisticated mathematical tools in both research and practice. Since companies cannot insure themselves completely against risk, owing to human inability to predict the future precisely, as written in the Al-Qur'an, surah Luqman, verse 34, they have to manage it to obtain an optimal portfolio. The objective is to minimize the variance among all portfolios, or alternatively, to maximize expected return among all portfolios that have at least a certain expected return. This study focuses on optimizing the risk portfolio via the so-called Markowitz MVO (Mean-Variance Optimization). The theoretical framework for the analysis comprises the arithmetic mean, geometric mean, variance, covariance, linear programming, and quadratic programming. Finding a minimum-variance portfolio leads to a convex quadratic program: minimizing the objective function xᵀQx subject to the constraints μᵀx ≥ r and Ax = b. The outcome of this research is the optimal risk portfolio for several investments, computed with MATLAB R2007b together with graphical analysis.
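As an illustration of mean-variance optimization, the two-asset global minimum-variance problem admits a closed-form solution (the general constrained case requires a quadratic-programming routine, such as the one available in MATLAB). The covariance numbers below are hypothetical.

```python
def min_variance_weights(var1, var2, cov12):
    """Global minimum-variance weights for two assets: minimize
    w' * Sigma * w subject to w1 + w2 = 1. Closed form:
    w1 = (var2 - cov12) / (var1 + var2 - 2*cov12)."""
    denom = var1 + var2 - 2.0 * cov12
    w1 = (var2 - cov12) / denom
    return w1, 1.0 - w1

def portfolio_variance(w1, w2, var1, var2, cov12):
    """Variance w' * Sigma * w of the two-asset portfolio (w1, w2)."""
    return w1 * w1 * var1 + w2 * w2 * var2 + 2.0 * w1 * w2 * cov12

# Hypothetical annualized variances and covariance for two assets
var1, var2, cov12 = 0.04, 0.09, 0.006
w1, w2 = min_variance_weights(var1, var2, cov12)
print(w1, w2, portfolio_variance(w1, w2, var1, var2, cov12))
```

The resulting portfolio variance is no larger than either asset's own variance, which is the diversification effect the Markowitz framework formalizes; adding the expected-return constraint μᵀx ≥ r turns the problem into the quadratic program stated in the abstract.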